
    Universal relations for hybridized s- and p-wave interactions from spin-orbital coupling

    In this work, we study the universal relations for one-dimensional spin-orbital-coupled fermions near both s- and p-wave resonances using effective field theory. Since spin-orbital coupling mixes different partial waves, a contact matrix is introduced to capture the nontrivial correlations between dimers. We find that the signature of the spin-orbital coupling appears at leading order in the off-diagonal components of the momentum distribution matrix, which are proportional to 1/q³ (where q is the relative momentum). We further derive the large-frequency behavior of the Raman spectroscopy, which serves as an independently measurable quantity for the contacts. Finally, we give an explicit example of the contacts by considering a two-body problem.
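The large-momentum tail stated in the abstract can be written schematically as follows (the notation C_{αβ} for an element of the contact matrix is ours, not taken from the paper):

```latex
n_{\alpha\beta}(q) \;\sim\; \frac{C_{\alpha\beta}}{q^{3}},
\qquad \alpha \neq \beta,\quad q \to \infty,
```

where n_{αβ}(q) is the off-diagonal component of the momentum distribution matrix and q is the relative momentum.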

    Improving Visual Embeddings using Attention and Geometry Constraints

    Learning a non-linear function that embeds raw data (i.e., images, videos, or language) into a discriminative feature embedding space is a fundamental problem in the learning community. In such embedding spaces, data with similar semantic meaning are clustered together, while data with dissimilar semantic meaning are separated. Many practical applications benefit from a good feature embedding, e.g., machine translation, classification/recognition, retrieval, and any-shot learning. In this Thesis, we aim to improve visual embeddings using attention and geometry constraints. In the first part of the Thesis, we develop two neural attention modules that automatically localize the informative regions within a feature map, thereby generating a discriminative feature representation for the image. An Attention in Attention (AiA) mechanism is first proposed to align the feature map along the deep network by modeling the interaction of inner and outer attention modules. Intuitively, the AiA mechanism can be understood as one attention nested inside another, with the inner one determining where the outer attention module should focus. Further, we employ explicit non-linear mappings in Reproducing Kernel Hilbert Spaces (RKHSs) to generate attention values, endowing the channel descriptor of the feature map with the representational power of second-order polynomial and Gaussian kernels. In addition, the Channel Recurrent Attention (CRA) module is proposed to build a global receptive field over the feature map. Existing attention mechanisms focus on either the channel pattern or the spatial pattern of the feature map and therefore cannot make full use of its information. The CRA module jointly learns the channel and spatial patterns of the feature map and produces an attention value for every element of the input feature map.
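The channel-recurrent idea described above can be sketched as follows. This is a minimal illustration with random (untrained) weights, not the thesis's implementation: spatial vectors of the feature map are fed sequentially into a vanilla RNN so the hidden state accumulates a global view, and a linear head with a sigmoid emits one attention value per element.

```python
import numpy as np

def channel_recurrent_attention(feat, hidden_dim=16, seed=0):
    """Sketch of channel-recurrent attention over a (C, H, W) feature map.

    Each spatial location contributes a length-C vector; feeding these
    vectors sequentially through a vanilla RNN gives the recurrence a
    global view of the map. All weights here are random placeholders --
    a real module would learn them.
    """
    rng = np.random.default_rng(seed)
    C, H, W = feat.shape
    W_ih = rng.standard_normal((hidden_dim, C)) * 0.1    # input-to-hidden
    W_hh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1  # recurrence
    W_out = rng.standard_normal((C, hidden_dim)) * 0.1   # per-channel head

    seq = feat.reshape(C, H * W).T        # (H*W, C): one spatial vector per step
    h = np.zeros(hidden_dim)
    attn = np.empty((H * W, C))
    for t, x in enumerate(seq):           # recurrence builds the global view
        h = np.tanh(W_ih @ x + W_hh @ h)
        attn[t] = 1.0 / (1.0 + np.exp(-(W_out @ h)))  # sigmoid gate in (0, 1)
    attn = attn.T.reshape(C, H, W)        # one attention value per element
    return feat * attn                    # reweighted feature map
```

Because the gate lies in (0, 1), the module rescales every element of the feature map rather than selecting only channel-wise or only spatial-wise patterns.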
This is achieved by feeding the spatial vectors sequentially into a recurrent neural network (RNN), so that the RNN builds a global view of the feature map. In the second part, we investigate the benefits of geometry constraints for embedding learning. We first study the geometry of a set as an embedding for a video clip. Usually, the video embedding is optimized using a triplet loss in which the distance is calculated between clip features, so the frame features cannot be optimized directly. To this end, we model the video clip as a set and employ a distance between sets in the triplet loss. Tailored to this set-aware triplet loss, a new set distance metric is also proposed to measure the hard frames in a triplet. Optimizing the set-aware triplet loss leads to a compact clip feature embedding, improving the discriminative power of the video representation. Beyond flat Euclidean embedding spaces, we further study curved spaces, i.e., hyperbolic spaces, as image embedding spaces. In contrast to Euclidean embeddings, hyperbolic embeddings can encode the hierarchical structure of the data, as the volume of hyperbolic space grows exponentially. However, performing basic comparison operations in hyperbolic spaces is complex and time-consuming; for example, a similarity measure is not well-defined there. To mitigate this issue, we introduce positive definite (pd) kernels for hyperbolic embeddings. Specifically, we propose four pd kernels in hyperbolic spaces together with a theoretical analysis: the hyperbolic tangent kernel, hyperbolic RBF kernel, hyperbolic Laplace kernel, and hyperbolic binomial kernel. We demonstrate the effectiveness of the proposed methods on image- and video-based person re-identification tasks. We also evaluate the generalization of the hyperbolic kernels on few-shot learning, zero-shot learning, and knowledge distillation tasks.
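One standard way to obtain a pd kernel on hyperbolic embeddings — not necessarily the thesis's exact definitions — is to lift points from the Poincaré ball to the tangent space at the origin via the logarithmic map and apply a Euclidean RBF there; composing a fixed map with a pd Euclidean kernel preserves positive definiteness. A minimal sketch under that assumption:

```python
import numpy as np

def poincare_log0(x, eps=1e-9):
    """Log map at the origin of the Poincare ball: lifts a point (norm < 1)
    to the tangent space, stretching radially by arctanh(||x||)/||x||."""
    n = np.linalg.norm(x)
    if n < eps:
        return np.zeros_like(x)
    return np.arctanh(min(n, 1.0 - eps)) * x / n

def hyperbolic_rbf(x, y, gamma=1.0):
    """RBF-style kernel for Poincare-ball embeddings: lift both points to
    the tangent space at the origin, then apply a Euclidean RBF there.
    This tangent-space construction is pd; it is offered as one plausible
    realization, not the thesis's hyperbolic RBF kernel per se."""
    u, v = poincare_log0(x), poincare_log0(y)
    return np.exp(-gamma * np.sum((u - v) ** 2))
```

As with a Euclidean RBF, the kernel is symmetric, equals 1 on the diagonal, and lies in (0, 1] — so it can serve directly as a similarity measure for hyperbolic embeddings.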